
    Evaluation of room-temperature semiconductor detectors for ultrahigh-resolution PET imaging

    The evaluation of a room-temperature semiconductor detector for ultrahigh-resolution positron emission tomography (PET) imaging is presented. The approach is based on a 2 mm thick cadmium telluride (CdTe) detector with a pixel pitch of 350 μm × 350 μm, bump-bonded to an energy-resolved photon-counting (ERPC) readout application-specific integrated circuit (ASIC). It is demonstrated that this configuration yields depth-of-interaction (DOI) information, and two ways of extracting the DOI information are presented. A prototype PET system based on this detector has been developed, together with a system calibration algorithm that takes the DOI information into account. The validity of the two proposed DOI-extraction methods was studied experimentally: measurements were made with a Co-57 point source with an active spherical area of 0.25 mm diameter. The beam entered the sensor at an angle of ~48.8 degrees to the surface, and the results showed that it passed through 5 pixels before exiting the bottom of the sensor. The validity of the two methods was further demonstrated by the development of the related PET system calibration algorithm. Future work will address image reconstruction based on the results of this calibration algorithm.
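    The reported 5-pixel traversal follows directly from the quoted geometry. Below is a minimal sketch of that arithmetic, assuming the abstract's values (2 mm sensor thickness, 350 μm pitch, beam angle of 48.8° measured from the sensor surface); it is only a consistency check, not the authors' calibration code.

```python
import math

# Geometry quoted in the abstract (assumed values).
thickness_mm = 2.0        # CdTe sensor thickness
pixel_pitch_mm = 0.350    # pixel pitch
beam_angle_deg = 48.8     # beam angle measured from the sensor surface

# Lateral distance travelled while the beam crosses the full sensor depth:
# depth / tan(angle to the surface).
lateral_travel_mm = thickness_mm / math.tan(math.radians(beam_angle_deg))
pitches_crossed = lateral_travel_mm / pixel_pitch_mm

print(f"lateral travel: {lateral_travel_mm:.2f} mm")     # ~1.75 mm
print(f"pixel pitches crossed: {pitches_crossed:.1f}")   # ~5.0, matching the reported 5 pixels
```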

    Learning to map between domains

    Humans consume visual content avidly. We are in the midst of an imaging revolution enabled by inexpensive digital cameras and the internet: almost every cell phone has a camera, and the photos taken by these cameras are shared massively and rapidly online. There is an asymmetry, however. Each individual can consume only a limited amount of visual content in a limited lifetime, and only a few are talented enough to effectively express and understand visual content they have never seen; the rest of us try to understand and express the unseen by translating it into something seen before. Similarly, in radiology, tens of thousands of medical images (MRI, CT, etc.) of patients are acquired and must be studied and interpreted. In this dissertation, we investigate a number of data-driven approaches for mapping from an 'unseen' or hard-to-understand domain to a 'seen' or easy-to-understand domain. Our work includes mapping between two image domains and mapping from an image domain to a language domain, which in computer vision are called image-to-image translation and image captioning, respectively. The presented methods not only help users easily and accurately synthesize useful photos, but also enable visual and linguistic effects not possible before this work. In clinical diagnosis, these approaches can improve the accuracy and efficiency of the diagnostic process for experienced radiologists; moreover, mapping from the image domain to the text domain can mimic the work of an experienced radiologist for automatic medical report generation.

    Part I describes image segmentation, which can be treated as a special case of image-to-image translation, and includes two works. The first solves the anisotropic-resolution problem for 3D medical image semantic segmentation (Appendix A). The second describes our US-patented cross-domain medical image segmentation: the first domain has labels while the second has none, and by designing a special domain mapping we enable semantic segmentation on the second domain. Both works improve computer-aided medical image interpretation and help radiologists read medical images more efficiently and accurately.

    Part II addresses clinical settings in which the advantages of multiple imaging modalities must be combined, requiring medical image registration or cross-domain image translation. Because the images from the different modalities (such as MRI and CT) come from the same patients, a crucial requirement for both tasks is one-to-one correspondence. This part presents learning a self-inverse network to realize a one-to-one mapping for both paired and unpaired image-to-image translation.

    Part III notes that the final output of a diagnosis lies in the text domain (medical reports, prescriptions, etc.). Since writing a medical report from a medical image can be error-prone for inexperienced physicians and time-consuming and tedious for experienced ones, automatic generation of medical image reports can make this difficult task efficient. This part extends the mapping from the image domain to the language domain; specifically, the mapping is done by learning a language representation to form the language domain.
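    The self-inverse idea in Part II can be summarized as using a single generator G for both translation directions and training it so that G(G(x)) ≈ x. Below is a minimal, hedged sketch of that training objective for the paired case; TinyGenerator, the tensor sizes, and the unit loss weights are illustrative placeholders, not the dissertation's architecture or settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy generator standing in for the translation network G.
# A self-inverse network uses ONE generator for both directions, so the
# objective encourages G(G(x)) ~= x, i.e. a one-to-one mapping.
class TinyGenerator(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = TinyGenerator()
x = torch.randn(4, 1, 64, 64)   # batch from domain A (e.g., MRI slices)
y = torch.randn(4, 1, 64, 64)   # paired images from domain B (e.g., CT slices)

fake_y = G(x)                   # A -> B with the shared network
recon_x = G(fake_y)             # B -> A with the same network

translation_loss = F.l1_loss(fake_y, y)     # supervised term for paired data
self_inverse_loss = F.l1_loss(recon_x, x)   # enforces G(G(x)) ~= x

loss = translation_loss + self_inverse_loss
loss.backward()
```

    In the unpaired setting the supervised term would be replaced by an adversarial loss, while the self-inverse term stays the same.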

    The Liver Tumor Segmentation Benchmark (LiTS)

    In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2016 and the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017. Twenty-four valid state-of-the-art liver and liver tumor segmentation algorithms were applied to a set of 131 computed tomography (CT) volumes with different tumor contrast levels (hyper-/hypo-intense), tissue abnormalities (e.g., metastasectomy), and lesions of varying size and number. The submitted algorithms were tested on 70 undisclosed volumes. The dataset was created in collaboration with seven hospitals and research institutions and manually reviewed by three independent radiologists. We found that no single algorithm performed best for both the liver and the tumors. The best liver segmentation algorithm achieved a Dice score of 0.96 (MICCAI), whereas for tumor segmentation the best algorithms achieved Dice scores of 0.67 (ISBI) and 0.70 (MICCAI). The LiTS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
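    Since the benchmark ranks methods by the Dice score, the following is a minimal sketch of how that metric is typically computed on binary masks. It is an illustrative implementation, not the LiTS evaluation code, and the toy 2D arrays stand in for the full 3D CT volumes used in the challenge.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = liver/tumor, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 2D example; LiTS evaluates on full 3D CT volumes.
pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(pred, truth), 3))  # 0.667
```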

    Nickel-Catalyzed Cyanation of Aryl Triflates Using Acetonitrile as a Cyano Source

    In classic cyanation reactions, toxic metal cyanide sources or complex organic cyanide sources are often used; it is therefore important to develop a green and economical cyano source. Initially, 4-biphenylyl trifluoromethanesulfonate was chosen as the model substrate. Through extensive screening of catalysts, ligands, additives, reductants, temperature, and other conditions, the optimal conditions were obtained: Ni(OTf)2, 1,3-bis(diphenylphosphino)propane, Zn(OTf)2, and Zn in mole fractions of 0.1, 0.1, 0.2, and 2, respectively, in 0.7 mL CH3CN under N2 at 100 °C for 60 h. The scope and limitations of the substrates were then studied under the optimal conditions; substrates bearing electron-donating substituents showed excellent efficiency in the cyanation of aryl trifluoromethanesulfonates. This is the first nickel-catalyzed cyanation of aryl trifluoromethanesulfonates using acetonitrile as a green and economical cyano source.

    Text Embedding Bank for Detailed Image Paragraph Captioning

    Existing deep learning-based models for image captioning typically consist of an image encoder to extract visual features and a language model decoder, an architecture that has shown promising results in single high-level sentence generation. However, only a word-level guiding signal is available when the image encoder is optimized to extract visual features. This inconsistency between the parallel extraction of visual features and the sequential text supervision limits success when the generated text is long (more than 50 words). We propose a new module, called the Text Embedding Bank (TEB), to address this problem for image paragraph captioning. The module uses the paragraph vector model to learn fixed-length feature representations from variable-length paragraphs; we refer to these fixed-length features as the TEB. The TEB module benefits paragraph captioning performance in two ways. First, it acts as a form of global and coherent deep supervision to regularize visual feature extraction in the image encoder. Second, it acts as a distributed memory that provides features of the whole paragraph to the language model, which alleviates the long-term dependency problem. Adding this module to two existing state-of-the-art methods achieves a new state-of-the-art result on the Stanford Visual Genome paragraph captioning dataset.
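    The fixed-length paragraph representation the abstract refers to is the paragraph vector (doc2vec) model. Below is a minimal sketch of producing such an embedding with gensim, assuming a toy corpus; the vector size, corpus, and training settings are illustrative and are not the paper's configuration.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus of paragraph captions; the real TEB is built from the
# ground-truth paragraphs of the captioning dataset.
paragraphs = [
    "a man rides a brown horse across a grassy field",
    "two dogs play with a red ball near a wooden fence",
]
corpus = [TaggedDocument(words=p.split(), tags=[i]) for i, p in enumerate(paragraphs)]

# Learn fixed-length paragraph embeddings (the "text embedding bank").
model = Doc2Vec(vector_size=64, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# A variable-length paragraph maps to one fixed-length vector, which can then
# regularize the image encoder or feed the language decoder as extra context.
teb_vector = model.infer_vector("a man rides a horse".split())
print(teb_vector.shape)  # (64,)
```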

    Hydrochloric Acid-Promoted Intermolecular 1,2-Thiofunctionalization of Aromatic Alkenes

    An efficient method for making 1,2-thiofunctionalized products via the difunctionalization of aromatic alkenes was developed. In this method, cheap and readily available hydrochloric acid promotes the 1,2-thiofunctionalization of aryl alkenes with N-arylsulfenylphthalimide and different types of nucleophiles. Importantly, the nucleophile scope extends to aryl ethers, indoles, and carboxylic acids with good reactivity. This practical and convenient method has broad substrate scope and gives high yields under metal-free and mild conditions. Furthermore, the products can be converted into the corresponding sulfoxides and sulfones by oxidation.

    Cu-Catalyzed Cyanation of Indoles with Acetonitrile as a Cyano Source

    Cu-catalyzed cyanation of indoles with acetonitrile for the synthesis of 3-cyanoindoles has been developed. A Cu/TEMPO/(Me3Si)2 system promotes highly efficient and selective C–H cyanation of indoles with unactivated acetonitrile as the cyano source via a sequential iodination/cyanation process in one pot. The reaction furnishes 3-cyanoindoles in moderate to good yields and tolerates a range of functional groups. Moreover, the low-cost copper catalyst and the use of non-hazardous acetonitrile as the cyano source underline the practicality of this reaction.

    Cu-Catalyzed Direct Amidation of Aromatic C–H Bonds: An Access to Arylamines

    A Cu-catalyzed aromatic C–H amidation with phthalimide, using oxygen as the terminal oxidant and requiring no additional additives, has been achieved. The reaction has a broad substrate scope and gives moderate to good yields in most cases. This method is complementary to previously reported metal-catalyzed C–H amination systems.